
WildFly 9

HTTP Services

This section summarises the HTTP-based clustering features.

Subsystem Support

This section describes the two key clustering subsystems, JGroups and Infinispan. The two work together to provide the clustering functionality: JGroups provides the group communication channels over which cluster nodes exchange messages, and Infinispan uses those channels as the transport for its replicated and distributed caches.

JGroups Subsystem

Purpose

The JGroups subsystem provides group communication support for HA services in the form of JGroups channels.

Named channel instances permit application peers in a cluster to communicate as a group and in such a way that the communication satisfies defined properties (e.g. reliable, ordered, failure-sensitive). Communication properties are configurable for each channel and are defined by the protocol stack used to create the channel. Protocol stacks consist of a base transport layer (used to transport messages around the cluster) together with a user-defined, ordered stack of protocol layers, where each protocol layer supports a given communication property.

The JGroups subsystem provides the following features:

  • definition of named protocol stacks

  • viewing of run-time metrics associated with channels

  • specification of a default stack for general use

In the following sections, we describe the JGroups subsystem.

JGroups channels are created transparently as part of the clustering functionality (e.g. on clustered application deployment, channels will be created behind the scenes to support clustered features such as session replication or transmission of SSO contexts around the cluster).

Configuration example

What follows is a sample JGroups subsystem configuration showing all of the possible elements and attributes which may be configured. We shall use this example to explain the meaning of the various elements and attributes.

The schema for the subsystem, describing all valid elements and attributes, can be found in the WildFly distribution, in the docs/schema directory.

<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp" diagnostics-socket-binding="jgroups-diagnostics"
            default-executor="jgroups" oob-executor="jgroups-oob" timer-executor="jgroups-timer"
            shared="false" thread-factory="jgroups-thread-factory"
            machine="machine1" rack="rack1" site="site1"/>
        <protocol type="PING">
            <property name="timeout">100</property>
        </protocol>
        <protocol type="MERGE3"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
    </stack>
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <protocol type="MPING" socket-binding="jgroups-mping"/>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
    </stack>
    <stack name="udp-xsite">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        <protocol type="PING" socket-binding="jgroups-mping"/>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
        <relay site="LONDON">
            <remote-site name="SFO" stack="tcp" cluster="global"/>
            <remote-site name="NYC" stack="tcp" cluster="global"/>
        </relay>
    </stack>
</subsystem>

<subsystem>

This element is used to configure the subsystem within a WildFly system profile.

  • xmlns This attribute specifies the XML namespace of the JGroups subsystem and, in particular, its version.

  • default-stack This attribute is used to specify a default stack for the JGroups subsystem. This default stack will be used whenever a stack is required but no stack is specified.
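
The default stack can also be changed at runtime using the CLI write-attribute operation. For example, assuming a stack named tcp is defined (a server reload may be required for the change to take effect):

/subsystem=jgroups:write-attribute(name=default-stack, value=tcp)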

<stack>

This element is used to configure a JGroups protocol stack.

  • name This attribute is used to specify the name of the stack.

<transport>

This element is used to configure the transport layer (required) of the protocol stack.

  • type This attribute specifies the transport type (e.g. UDP, TCP, TCPGOSSIP).

  • socket-binding This attribute references a defined socket binding in the server profile. It is used when JGroups needs to create general sockets internally.

  • diagnostics-socket-binding This attribute references a defined socket binding in the server profile. It is used when JGroups needs to create sockets for use with the diagnostics program. For more about the use of diagnostics, see the JGroups documentation for probe.sh.

  • default-executor This attribute references a defined thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks to handle incoming JGroups messages.

  • oob-executor This attribute references a defined thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks to handle incoming JGroups OOB (out-of-band) messages.

  • timer-executor This attribute references a defined thread pool executor in the threads subsystem. It governs the allocation and execution of runnable timer-related tasks.

  • shared This attribute indicates whether or not this transport is shared amongst several JGroups stacks.

  • thread-factory This attribute references a defined thread factory in the threads subsystem. It governs the allocation of threads for running tasks which are not handled by the executors above.

  • site This attribute defines a site (data centre) id for this node.

  • rack This attribute defines a rack (server rack) id for this node.

  • machine This attribute defines a machine (host) id for this node.

The site, rack and machine ids are used by the Infinispan topology-aware consistent hash function which, when using dist mode, prevents replicas of the same entry from being stored on the same host, rack or site.

<property>

This element is used to configure a transport property.

  • name This attribute specifies the name of the transport property. The value is provided as text for the property element.

<protocol>

This element is used to configure a (non-transport) protocol layer in the JGroups stack. Protocol layers are ordered within the stack.

  • type This attribute specifies the name of the JGroups protocol implementation (e.g. MPING, pbcast.GMS), with the package prefix org.jgroups.protocols removed.

  • socket-binding This attribute references a defined socket binding in the server profile. It is used when JGroups needs to create general sockets internally for this protocol instance.

<property>

This element is used to configure a protocol property.

  • name This attribute specifies the name of the protocol property. The value is provided as text for the property element.

<relay>

This element is used to configure the RELAY protocol for a JGroups stack. RELAY is a protocol which provides cross-site replication between defined sites (data centres). In the RELAY protocol, defined sites specify the names of remote sites (backup sites) to which their data should be backed up. Channels are defined between sites to permit the RELAY protocol to transport the data from the current site to a backup site.

  • site This attribute specifies the name of the current site. Site names can be referenced elsewhere (e.g. in the JGroups remote-site configuration elements, as well as backup configuration elements in the Infinispan subsystem).

<remote-site>

This element is used to configure a remote site for the RELAY protocol.

  • name This attribute specifies the name of the remote site to which this configuration applies.

  • stack This attribute specifies a JGroups protocol stack to use for communication between this site and the remote site.

  • cluster This attribute specifies the name of the JGroups channel to use for communication between this site and the remote site.

Use Cases

In many cases, channels will be configured via XML as in the example above, so that the channels will be available upon server startup. However, channels may also be added, removed or have their configurations changed in a running server by making use of the WildFly management API command-line interface (CLI). In this section, we present some key use cases for the JGroups management API.

The key use cases covered are:

  • adding a stack

  • adding a protocol to an existing stack

  • adding a property to a protocol

The WildFly management API command-line interface (CLI) itself can be used to provide extensive information on the attributes and commands available in the JGroups subsystem interface used in these examples.

Add a stack

/subsystem=jgroups/stack=mystack:add(transport={}, protocols={})

Add a protocol to a stack

/subsystem=jgroups/stack=mystack/transport=TRANSPORT:add(type=<type>, socket-binding=<socketbinding>)
/subsystem=jgroups/stack=mystack:add-protocol(type=<type>, socket-binding=<socketbinding>)

Add a property to a protocol

/subsystem=jgroups/stack=mystack/transport=TRANSPORT/property=<property>:add(value=<value>)
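
Putting these together, a hypothetical stack named mystack with a UDP transport, a PING discovery protocol and a reduced discovery timeout could be assembled as follows. This is a sketch based on the operations above; the socket binding jgroups-udp is assumed to already be defined in the server profile.

/subsystem=jgroups/stack=mystack:add(transport={}, protocols={})
/subsystem=jgroups/stack=mystack/transport=TRANSPORT:add(type=UDP, socket-binding=jgroups-udp)
/subsystem=jgroups/stack=mystack:add-protocol(type=PING)
/subsystem=jgroups/stack=mystack/protocol=PING/property=timeout:add(value=100)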

Infinispan Subsystem

Purpose

The Infinispan subsystem provides caching support for HA services in the form of Infinispan caches:  high-performance, transactional caches which can operate in both non-distributed and distributed scenarios. Distributed caching support is used in the provision of many key HA services. For example, the failover of a session-oriented client HTTP request from a failing node to a new (failover) node depends on session data for the client being available on the new node. In other words, the client session data needs to be replicated across nodes in the cluster. This is effectively achieved via a distributed Infinispan cache. This approach to providing fail-over also applies to EJB SFSB sessions. Over and above providing support for fail-over, an underlying cache is also required when providing second-level caching for entity beans using Hibernate, and this case is also handled through the use of an Infinispan cache.

The Infinispan subsystem provides the following features:

  • allows definition and configuration of named cache containers and caches

  • view run-time metrics associated with cache container and cache instances

In the following sections, we describe the Infinispan subsystem.

Infinispan cache containers and caches are created transparently as part of the clustering functionality (e.g. on clustered application deployment, cache containers and their associated caches will be created behind the scenes to support clustered features such as session replication or caching of entities around the cluster).

Configuration Example

In this section, we provide an example XML configuration of the infinispan subsystem and review the configuration elements and attributes.

The schema for the subsystem, describing all valid elements and attributes, can be found in the WildFly distribution, in the docs/schema directory.

 

<subsystem xmlns="urn:jboss:domain:infinispan:2.0">
  <cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
      <transport lock-timeout="60000"/>
      <replicated-cache name="default" mode="SYNC" batching="true">
          <locking isolation="REPEATABLE_READ"/>
      </replicated-cache>
  </cache-container>
  <cache-container name="web" aliases="standard-session-cache" default-cache="repl" module="org.wildfly.clustering.web.infinispan">
      <transport lock-timeout="60000"/>
      <replicated-cache name="repl" mode="ASYNC" batching="true">
          <file-store/>
      </replicated-cache>
      <replicated-cache name="sso" mode="SYNC" batching="true"/>
      <distributed-cache name="dist" mode="ASYNC" batching="true" l1-lifespan="0">
          <file-store/>
      </distributed-cache>
  </cache-container>
  <cache-container name="ejb" aliases="sfsb sfsb-cache" default-cache="repl" module="org.jboss.as.clustering.ejb3.infinispan">
      <transport lock-timeout="60000"/>
      <replicated-cache name="repl" mode="ASYNC" batching="true">
          <eviction strategy="LRU" max-entries="10000"/>
          <file-store/>
      </replicated-cache>
      <!--
        ~  Clustered cache used internally by the EJB subsystem for managing the client-mapping(s) of
        ~  the socket binding referenced by the EJB remoting connector
        -->
      <replicated-cache name="remote-connector-client-mappings" mode="SYNC" batching="true"/>
      <distributed-cache name="dist" mode="ASYNC" batching="true" l1-lifespan="0">
          <eviction strategy="LRU" max-entries="10000"/>
          <file-store/>
      </distributed-cache>
  </cache-container>
  <cache-container name="hibernate" default-cache="local-query" module="org.hibernate">
      <transport lock-timeout="60000"/>
      <local-cache name="local-query">
          <transaction mode="NONE"/>
          <eviction strategy="LRU" max-entries="10000"/>
          <expiration max-idle="100000"/>
      </local-cache>
      <invalidation-cache name="entity" mode="SYNC">
          <transaction mode="NON_XA"/>
          <eviction strategy="LRU" max-entries="10000"/>
          <expiration max-idle="100000"/>
      </invalidation-cache>
      <replicated-cache name="timestamps" mode="ASYNC">
          <transaction mode="NONE"/>
          <eviction strategy="NONE"/>
       </replicated-cache>
  </cache-container>
</subsystem>

<cache-container>

This element is used to configure a cache container.

  • name This attribute is used to specify the name of the cache container.

  • default-cache This attribute configures the default cache to be used, when no cache is otherwise specified.

  • listener-executor This attribute references a defined thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks to handle asynchronous cache listener notifications.

  • eviction-executor This attribute references a defined thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks to handle evictions.

  • replication-queue-executor This attribute references a defined thread pool executor in the threads subsystem. It governs the scheduled tasks used to periodically flush the replication queue for asynchronously replicated caches.

  • jndi-name This attribute is used to assign a name for the cache container in the JNDI name service.

  • module This attribute configures the module whose class loader should be used when building this cache container's configuration.

  • start This attribute configures the cache container start mode. It is deprecated; the only supported value, and the default, is LAZY (on-demand start).

  • aliases This attribute is used to define aliases for the cache container name.

This element has the following child elements: <transport>, <local-cache>, <invalidation-cache>, <replicated-cache>, and <distributed-cache>.

<transport>

This element is used to configure the JGroups transport used by the cache container, when required.

  • stack This attribute configures the JGroups stack to be used for the transport. If none is specified, the default-stack for the JGroups subsystem is used.

  • cluster This attribute configures the name of the group communication cluster. This is the name which will be seen in debugging logs.

  • executor This attribute references a defined thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks associated with the transport, such as asynchronous transport operations.

  • lock-timeout This attribute configures the time-out to be used when obtaining locks for the transport.

  • site This attribute configures the site id of the cache container.

  • rack This attribute configures the rack id of the cache container.

  • machine This attribute configures the machine id of the cache container.

    The presence of the transport element is required when operating in clustered mode.

The remaining child elements of <cache-container>, namely <local-cache>, <invalidation-cache>, <replicated-cache> and <distributed-cache>, each configures one of four key cache types or classifications.

These cache-related elements are actually part of an xsd hierarchy with abstract complexTypes cache, clustered-cache, and shared-cache. In order to simplify the presentation, we notate these as pseudo-elements <abstract cache>, <abstract clustered-cache> and <abstract shared-cache>. In what follows, we first describe the extension hierarchy of base elements, and then show how the cache type elements relate to them.

<abstract cache>

This abstract base element defines the attributes and child elements common to all non-clustered caches. 

  • name This attribute configures the name of the cache. This name may be referenced by other subsystems.

  • start This attribute configures the cache start mode. It is deprecated; the only supported value, and the default, is LAZY (on-demand start).

  • batching This attribute configures batching. If enabled, the invocation batching API will be made available for this cache.

  • indexing This attribute configures indexing. If enabled, entries will be indexed when they are added to the cache. Indexes will be updated as entries change or are removed.

  • jndi-name This attribute is used to assign a name for the cache in the JNDI name service.

  • module This attribute configures the module whose class loader should be used when building this cache's configuration.

The <abstract cache> abstract base element has the following child elements: <indexing-properties>, <locking>, <transaction>, <eviction>, <expiration>, <store>, <file-store>, <string-keyed-jdbc-store>, <binary-keyed-jdbc-store>, <mixed-keyed-jdbc-store>, <remote-store>.

<indexing-properties>

This child element defines properties to control indexing behaviour.

<locking>

This child element configures the locking behaviour of the cache.

  • isolation This attribute configures the cache locking isolation level. Allowable values are NONE, SERIALIZABLE, REPEATABLE_READ, READ_COMMITTED, READ_UNCOMMITTED.

  • striping If true, a pool of shared locks is maintained for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock striping helps control memory footprint but may reduce concurrency in the system.

  • acquire-timeout This attribute configures the maximum time to attempt a particular lock acquisition.

  • concurrency-level This attribute is used to configure the concurrency level. Adjust this value according to the number of concurrent threads interacting with Infinispan.
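
As an illustration, a <locking> element using all of these attributes might look as follows (the values shown are illustrative only):

<locking isolation="READ_COMMITTED" striping="false" acquire-timeout="15000" concurrency-level="1000"/>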

<transaction>

This child element configures the transactional behaviour of the cache.

  • mode This attribute configures the transaction mode, setting the cache transaction mode to one of NONE, NON_XA, NON_DURABLE_XA, FULL_XA.

  • stop-timeout If there are any ongoing transactions when a cache is stopped, Infinispan waits for ongoing remote and local transactions to finish. The amount of time to wait is defined by the cache stop timeout.

  • locking This attribute configures the locking mode for this cache, one of OPTIMISTIC or PESSIMISTIC.

<eviction>

This child element configures the eviction behaviour of the cache.

  • strategy This attribute configures the cache eviction strategy. Available options are 'UNORDERED', 'FIFO', 'LRU', 'LIRS' and 'NONE' (to disable eviction).

  • max-entries This attribute configures the maximum number of entries in a cache instance. If the selected value is not a power of two, the actual value will be rounded up to the least power of two larger than the selected value. A value of -1 means no limit.

<expiration>

This child element configures the expiration behaviour of the cache.

  • max-idle This attribute configures the maximum idle time a cache entry will be maintained in the cache, in milliseconds. If the idle time is exceeded, the entry will be expired cluster-wide. -1 means the entries never expire.

  • lifespan This attribute configures the maximum lifespan of a cache entry, after which the entry is expired cluster-wide, in milliseconds. -1 means the entries never expire.

  • interval This attribute specifies the interval (in ms) between subsequent runs to purge expired entries from memory and any cache stores. If you wish to disable the periodic expiration process altogether, set this interval to -1.
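
For example, a local cache limited to 10000 entries, with entries expiring after one minute of idleness and a purge run every 5 seconds, might be configured as follows (a sketch; the values are illustrative):

<local-cache name="mycache">
    <eviction strategy="LRU" max-entries="10000"/>
    <expiration max-idle="60000" interval="5000"/>
</local-cache>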

The remaining child elements of the abstract base element <cache>, namely <store>, <file-store>, <remote-store>, <string-keyed-jdbc-store>, <binary-keyed-jdbc-store> and <mixed-keyed-jdbc-store>, each configures one of six key cache store types.

These cache store-related elements are actually part of an xsd extension hierarchy with abstract complexTypes base-store and base-jdbc-store. As before, in order to simplify the presentation, we notate these as pseudo-elements <abstract base-store> and <abstract base-jdbc-store>.  In what follows, we first describe the extension hierarchy of base elements, and then show how the cache store elements relate to them.

<abstract base-store>

This abstract base element defines the attributes and child elements common to all cache stores.

  • shared This attribute should be set to true when multiple cache instances share the same cache store (e.g. multiple nodes in a cluster using a JDBC-based CacheStore pointing to the same, shared database). Setting this to true avoids multiple cache instances writing the same modification multiple times. If enabled, only the node where the modification originated will write to the cache store. If disabled, each individual cache reacts to a potential remote update by storing the data to the cache store.

  • preload This attribute configures whether or not, when the cache starts, data stored in the cache loader will be pre-loaded into memory. This is particularly useful when data in the cache loader is needed immediately after start-up and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm-cache' on start-up, however there is a performance penalty as start-up time is affected by this process. Note that pre-loading is done in a local fashion, so any data loaded is only stored locally in the node. No replication or distribution of the preloaded data happens. Also, Infinispan only pre-loads up to the maximum configured number of entries in eviction.

  • passivation If true, data is only written to the cache store when it is evicted from memory, a phenomenon known as passivation. Next time the data is requested, it will be 'activated' which means that data will be brought back to memory and removed from the persistent store. If false, the cache store contains a copy of the cache contents in memory, so writes to cache result in cache store writes. This essentially gives you a 'write-through' configuration.

  • fetch-state This attribute, if true, causes persistent state to be fetched when joining a cluster. If multiple cache stores are chained, only one of them can have this property enabled.

  • purge This attribute configures whether the cache store is purged upon start-up.

  • singleton This attribute configures whether or not the singleton store cache store is enabled. SingletonStore is a delegating cache store used for situations when only one instance in a cluster should interact with the underlying store.

  • class This attribute configures a custom store implementation class to use for this cache store.

  • properties This attribute is used to configure a list of cache store properties.

The abstract base element has one child element: <write-behind>.

<write-behind>

This element is used to configure a cache store as write-behind instead of write-through. In write-through mode, writes to the cache are also synchronously written to the cache store, whereas in write-behind mode, writes to the cache are followed by asynchronous writes to the cache store.

  • flush-lock-timeout This attribute configures the time-out for acquiring the lock which guards the state to be flushed to the cache store periodically.

  • modification-queue-size This attribute configures the maximum number of entries in the asynchronous queue. When the queue is full, the store becomes write-through until it can accept new entries.

  • shutdown-timeout This attribute configures the time-out (in ms) to stop the cache store.

  • thread-pool This attribute is used to configure the size of the thread pool whose threads are responsible for applying the modifications to the cache store.
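
A file-based cache store using write-behind might be configured as follows. This is a sketch using the attribute names listed above; consult the subsystem schema in docs/schema for the exact names supported by your version.

<file-store passivation="false" purge="false">
    <write-behind flush-lock-timeout="5000" modification-queue-size="1024" shutdown-timeout="25000"/>
</file-store>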

<abstract base-jdbc-store> extends <abstract base-store>

This abstract base element defines the attributes and child elements common to all JDBC-based cache stores.

  • datasource This attribute configures the datasource for the JDBC-based cache store.

  • entry-table This attribute configures the database table used to store cache entries.

  • bucket-table This attribute configures the database table used to store binary cache entries.

<file-store> extends <abstract base-store>

This child element is used to configure a file-based cache store. This requires specifying the name of the file to be used as backing storage for the cache store. 

  • relative-to This attribute optionally configures a relative path prefix for the file store path. Can be null.

  • path This attribute configures an absolute path to a file if relative-to is null; configures a relative path to the file, in relation to the value for relative-to, otherwise.
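
For example, the following (illustrative) configuration stores cache data in a mystore directory resolved against the standard server data directory path variable:

<file-store relative-to="jboss.server.data.dir" path="mystore" passivation="false"/>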

<remote-store> extends <abstract base-store>

This child element of cache is used to configure a remote cache store. It has a child <remote-servers>.

  • cache This attribute configures the name of the remote cache to use for this remote store.

  • tcp-nodelay This attribute configures a TCP_NODELAY value for communication with the remote cache.

  • socket-timeout This attribute configures a socket time-out for communication with the remote cache.

<remote-servers>

This child element of <remote-store> configures the list of remote servers for this cache store.

<remote-server>

This element configures a remote server. A remote server is defined completely by a locally defined outbound socket binding, through which communication is made with the server.

  • outbound-socket-binding This attribute configures an outbound socket binding for a remote server.
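
Putting these elements together, a remote store pointing at a single remote cache server might be configured as follows. This is a sketch following the element structure described above; the outbound socket binding remote-cache-server is an assumption and must be defined in the server profile.

<remote-store cache="mycache" socket-timeout="60000" tcp-nodelay="true">
    <remote-servers>
        <remote-server outbound-socket-binding="remote-cache-server"/>
    </remote-servers>
</remote-store>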

<local-cache> extends <abstract cache>

This element configures a local cache.

<abstract clustered-cache> extends <abstract cache>

This abstract base element defines the attributes and child elements common to all clustered caches. A clustered cache is a cache which spans multiple nodes in a cluster. It inherits from <cache>, so that all attributes and elements of <cache> are also defined for <clustered-cache>.

  • async-marshalling This attribute configures async marshalling. If enabled, this will cause marshalling of entries to be performed asynchronously.

  • mode This attribute configures the clustered cache mode, ASYNC for asynchronous operation, or SYNC for synchronous operation.

  • queue-size In ASYNC mode, this attribute can be used to trigger flushing of the queue when it reaches a specific threshold.

  • queue-flush-interval In ASYNC mode, this attribute controls how often the asynchronous thread used to flush the replication queue runs. This should be a positive integer which represents thread wakeup time in milliseconds.

  • remote-timeout In SYNC mode, this attribute specifies the time (in ms) to wait for an acknowledgement when making a remote call, after which the call is aborted and an exception is thrown.
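
For example, an asynchronous replicated cache whose replication queue is flushed when it reaches 1000 entries, or every 10 milliseconds, might be declared as follows (values are illustrative):

<replicated-cache name="myreplcache" mode="ASYNC" queue-size="1000" queue-flush-interval="10"/>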

<invalidation-cache> extends <abstract clustered-cache>

This element configures an invalidation cache.  

<abstract shared-cache> extends <abstract clustered-cache>

This abstract base element defines the attributes and child elements common to all shared caches. A shared cache is a clustered cache which shares state with its cache peers in the cluster. It inherits from <clustered-cache>, so that all attributes and elements of <clustered-cache> are also defined for <shared-cache>.

<state-transfer>

This child element configures the state transfer behaviour of the cache when it joins the cluster.

  • enabled If enabled, this will cause the cache to ask neighbouring caches for state when it starts up, so the cache starts 'warm', although it will impact start-up time.

  • timeout This attribute configures the maximum amount of time (ms) to wait for state from neighbouring caches, before throwing an exception and aborting start-up.

  • chunk-size This attribute configures the size, in bytes, in which to batch the transfer of cache entries.

<backups>

This child element configures the set of remote sites to which this cache backs up its data.

<backup>

This child element configures a single backup site.

  • strategy This attribute configures the backup strategy for this cache. Allowable values are SYNC, ASYNC.

  • failure-policy This attribute configures the policy to follow when connectivity to the backup site fails. Allowable values are IGNORE, WARN, FAIL, CUSTOM.

  • enabled This attribute configures whether or not this backup is enabled. If enabled, data will be sent to the backup site; otherwise, the backup site will be effectively ignored.

  • timeout This attribute configures the time-out for replicating to the backup site.

  • after-failures This attribute configures the number of failures after which this backup site should go off-line.

  • min-wait This attribute configures the minimum time (in milliseconds) to wait after the max number of failures is reached, after which this backup site should go off-line.

<backup-for>

This child element configures this cache as a backup for a named cache at a remote site.

  • remote-cache This attribute configures the name of the remote cache for which this cache acts as a backup.

  • remote-site This attribute configures the site of the remote cache for which this cache acts as a backup.
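
For example, a cache acting as the backup for a cache named users at a remote site LON might be declared as follows (a sketch; the names are illustrative):

<replicated-cache name="usersBackup" mode="SYNC">
    <backup-for remote-cache="users" remote-site="LON"/>
</replicated-cache>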

<replicated-cache> extends <abstract shared-cache>

This element configures a replicated cache. With a replicated cache, all contents (key-value pairs) of the cache are replicated on all nodes in the cluster.

<distributed-cache> extends <abstract shared-cache>

This element configures a distributed cache. With a distributed cache, contents of the cache are selectively replicated on nodes in the cluster, according to the number of owners specified.

  • owners This attribute configures the number of cluster-wide replicas for each cache entry.

  • segments This attribute configures the number of hash space segments which is the granularity for key distribution in the cluster. Value must be strictly positive.

  • l1-lifespan This attribute configures the maximum lifespan of an entry placed in the L1 cache. It configures the L1 cache behaviour in 'distributed' cache instances; in any other cache mode, this attribute is ignored.
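
For example, a distributed cache keeping two replicas of each entry across 80 hash segments, with L1 entries cached for one minute, might be declared as follows (values are illustrative):

<distributed-cache name="mydistcache" mode="SYNC" owners="2" segments="80" l1-lifespan="60000"/>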

Use Cases

In many cases, cache containers and caches will be configured via XML as in the example above, so that they will be available upon server start-up. However, cache containers and caches may also be added, removed or have their configurations changed in a running server by making use of the WildFly management API command-line interface (CLI). In this section, we present some key use cases for the Infinispan management API.

The key use cases covered are:

  • adding a cache container

  • adding a cache to an existing cache container

  • configuring the transaction subsystem of a cache

    The WildFly management API command-line interface (CLI) can be used to provide extensive information on the attributes and commands available in the Infinispan subsystem interface used in these examples.

Add a cache container

/subsystem=infinispan/cache-container=mycontainer:add(default-cache=<default-cache-name>)
/subsystem=infinispan/cache-container=mycontainer/transport=TRANSPORT:add(lock-timeout=<timeout>)

Add a cache

/subsystem=infinispan/cache-container=mycontainer/local-cache=mylocalcache:add()

Configure the transaction component of a cache

/subsystem=infinispan/cache-container=mycontainer/local-cache=mylocalcache/transaction=TRANSACTION:add(mode=<transaction-mode>)
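
Putting these together, a hypothetical container mycontainer holding a transactional local cache mylocalcache could be created as follows (a sketch based on the commands above; the names are illustrative):

/subsystem=infinispan/cache-container=mycontainer:add(default-cache=mylocalcache)
/subsystem=infinispan/cache-container=mycontainer/local-cache=mylocalcache:add()
/subsystem=infinispan/cache-container=mycontainer/local-cache=mylocalcache/transaction=TRANSACTION:add(mode=NON_XA)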

Clustered Web Sessions

Clustered SSO

Load Balancing

This section describes load balancing via Apache + mod_jk and Apache + mod_cluster.

Load balancing with Apache + mod_jk

This section describes how load balancing of HTTP requests across cluster nodes can be performed by Apache httpd using the mod_jk connector.

Load balancing with Apache + mod_cluster

This section describes how load balancing can be performed by Apache httpd using the mod_cluster connector; the supporting modcluster subsystem is described below.

mod_cluster Subsystem

The mod_cluster integration is done via the modcluster subsystem. It requires mod_cluster-1.1.x or mod_cluster-1.2.x (since 7.1.0).

The modcluster subsystem supports several operations:

[standalone@localhost:9999 subsystem=modcluster] :read-operation-names
{
    "outcome" => "success",
    "result" => [
        "add",
        "add-custom-metric",
        "add-metric",
        "add-proxy",
        "disable",
        "disable-context",
        "enable",
        "enable-context",
        "list-proxies",
        "read-attribute",
        "read-children-names",
        "read-children-resources",
        "read-children-types",
        "read-operation-description",
        "read-operation-names",
        "read-proxies-configuration",
        "read-proxies-info",
        "read-resource",
        "read-resource-description",
        "refresh",
        "remove-custom-metric",
        "remove-metric",
        "remove-proxy",
        "reset",
        "stop",
        "stop-context",
        "validate-address",
        "write-attribute"
    ]
}
The operations specific to the modcluster subsystem are divided into three categories: those that affect the configuration and require a restart of the subsystem, those that just modify the behaviour temporarily, and those that display information from the httpd side.

Operations displaying httpd information

There are two operations that display how Apache httpd sees the node:

read-proxies-configuration

Sends a DUMP message to each Apache httpd proxy the node is connected to and displays the responses received from Apache httpd.

[standalone@localhost:9999 subsystem=modcluster] :read-proxies-configuration
{
    "outcome" => "success",
    "result" => [
        "neo3:6666",
        "balancer: [1] Name: mycluster Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 1 Timeout: 0 Maxtry: 1
node: [1:1],Balancer: mycluster,JVMRoute: 498bb1f0-00d9-3436-a341-7f012bc2e7ec,Domain: [],Host: 127.0.0.1,Port: 8080,Type: http,flushpackets: 0,flushwait: 10,ping: 10,smax: 26,ttl: 60,timeout: 0
host: 1 [example.com] vhost: 1 node: 1
host: 2 [localhost] vhost: 1 node: 1
host: 3 [default-host] vhost: 1 node: 1
context: 1 [/myapp] vhost: 1 node: 1 status: 1
context: 2 [/] vhost: 1 node: 1 status: 1
",
        "jfcpc:6666",
        "balancer: [1] Name: mycluster Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 1 Timeout: 0 maxAttempts: 1
node: [1:1],Balancer: mycluster,JVMRoute: 498bb1f0-00d9-3436-a341-7f012bc2e7ec,LBGroup: [],Host: 127.0.0.1,Port: 8080,Type: http,flushpackets: 0,flushwait: 10,ping: 10,smax: 26,ttl: 60,timeout: 0
host: 1 [default-host] vhost: 1 node: 1
host: 2 [localhost] vhost: 1 node: 1
host: 3 [example.com] vhost: 1 node: 1
context: 1 [/] vhost: 1 node: 1 status: 1
context: 2 [/myapp] vhost: 1 node: 1 status: 1
"
    ]
}
read-proxies-info

Sends an INFO message to each Apache httpd proxy the node is connected to and displays the responses received from Apache httpd.

[standalone@localhost:9999 subsystem=modcluster] :read-proxies-info
{
    "outcome" => "success",
    "result" => [
        "neo3:6666",
        "Node: [1],Name: 498bb1f0-00d9-3436-a341-7f012bc2e7ec,Balancer: mycluster,Domain: ,Host: 127.0.0.1,Port: 8080,Type: http,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 26,Ttl: 60000000,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: -1
Vhost: [1:1:1], Alias: example.com
Vhost: [1:1:2], Alias: localhost
Vhost: [1:1:3], Alias: default-host
Context: [1:1:1], Context: /myapp, Status: ENABLED
Context: [1:1:2], Context: /, Status: ENABLED
",
        "jfcpc:6666",
        "Node: [1],Name: 498bb1f0-00d9-3436-a341-7f012bc2e7ec,Balancer: mycluster,LBGroup: ,Host: 127.0.0.1,Port: 8080,Type: http,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 26,Ttl: 60,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 1
Vhost: [1:1:1], Alias: default-host
Vhost: [1:1:2], Alias: localhost
Vhost: [1:1:3], Alias: example.com
Context: [1:1:1], Context: /, Status: ENABLED
Context: [1:1:2], Context: /myapp, Status: ENABLED
"
    ]
}

Operations that handle the proxies the node is connected to

There are three operations that can be used to manipulate the list of Apache httpd proxies the node is connected to.

list-proxies

Displays the httpd proxies that are connected to the node. The proxies can be discovered via the Advertise protocol or specified via the proxy-list attribute.

[standalone@localhost:9999 subsystem=modcluster] :list-proxies
{
    "outcome" => "success",
    "result" => [
        "neo3:6666",
        "jfcpc:6666"
    ]
}
remove-proxy

Removes a proxy from the discovered proxies or, temporarily, from the proxy-list attribute.

[standalone@localhost:9999 subsystem=modcluster] :remove-proxy(host=jfcpc, port=6666)
{"outcome" => "success"}
add-proxy

Adds a proxy to the discovered proxies or, temporarily, to the proxy-list attribute.

[standalone@localhost:9999 subsystem=modcluster] :add-proxy(host=jfcpc, port=6666)
{"outcome" => "success"}

Context-related operations

These operations allow context-related commands to be sent to Apache httpd. They are sent automatically when deploying or undeploying web applications.

enable-context

Tells Apache httpd that the context is ready to receive requests.

[standalone@localhost:9999 subsystem=modcluster] :enable-context(context=/myapp, virtualhost=default-host)
{"outcome" => "success"}
disable-context

Tells Apache httpd that it should not send new session requests to the context of the virtual host.

[standalone@localhost:9999 subsystem=modcluster] :disable-context(context=/myapp, virtualhost=default-host)
{"outcome" => "success"}
stop-context

Tells Apache httpd that it should not send requests to the context of the virtual host.

[standalone@localhost:9999 subsystem=modcluster] :stop-context(context=/myapp, virtualhost=default-host, waittime=50)
{"outcome" => "success"}

Node-related operations

These operations are like the context operations, but they apply to all web applications running on the node, or affect the node as a whole.

refresh

Refreshes the node by sending a new CONFIG message to Apache httpd.

reset

Resets the connection between Apache httpd and the node.

Configuration

Metric configuration

There are four metric operations, corresponding to adding and removing load metrics to and from the dynamic-load-provider. Note that when nothing is defined, a simple-load-provider is used with a fixed load factor of one.

[standalone@localhost:9999 subsystem=modcluster] :read-resource(name=mod-cluster-config)
{
    "outcome" => "success",
    "result" => {"simple-load-provider" => {"factor" => "1"}}
}

That corresponds to the following configuration:

<subsystem xmlns="urn:jboss:domain:modcluster:1.0">
    <mod-cluster-config>
        <simple-load-provider factor="1"/>
    </mod-cluster-config>
</subsystem>
add-metric

Adds a metric to the dynamic-load-provider; the dynamic-load-provider is created in the configuration if needed.

[standalone@localhost:9999 subsystem=modcluster] :add-metric(type=cpu)
{"outcome" => "success"}
[standalone@localhost:9999 subsystem=modcluster] :read-resource(name=mod-cluster-config)
{
    "outcome" => "success",
    "result" => {
        "dynamic-load-provider" => {
            "history" => 9,
            "decay" => 2,
            "load-metric" => [{
                "type" => "cpu"
            }]
        }
    }
}
remove-metric

Removes a metric from the dynamic-load-provider.

[standalone@localhost:9999 subsystem=modcluster] :remove-metric(type=cpu)
{"outcome" => "success"}
add-custom-metric / remove-custom-metric

These operations are like add-metric and remove-metric, except that they require a class parameter instead of a type. They usually need additional properties, which can be specified as follows:

[standalone@localhost:9999 subsystem=modcluster] :add-custom-metric(class=myclass, property=[("pro1" => "value1"), ("pro2" => "value2")])
{"outcome" => "success"}

which corresponds to the following in the XML configuration file:

<subsystem xmlns="urn:jboss:domain:modcluster:1.0">
    <mod-cluster-config>
        <dynamic-load-provider history="9" decay="2">
            <custom-load-metric class="myclass">
                <property name="pro1" value="value1"/>
                <property name="pro2" value="value2"/>
            </custom-load-metric>
        </dynamic-load-provider>
    </mod-cluster-config>
</subsystem>
JVMRoute configuration

If you want to use your own JVM route instead of the automatically generated one, you can insert the following system property:

...
</extensions>
<system-properties>
   <property name="jboss.mod_cluster.jvmRoute" value="myJvmRoute"/>
</system-properties>
<management>
...
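
The same system property can also be added using the CLI (a sketch; the new route typically takes effect only after a server restart):

/system-property=jboss.mod_cluster.jvmRoute:add(value=myJvmRoute)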